deepSSF landscape predictions

Author

Scott Forrest

Published

February 2, 2025

Abstract

As trained convolution filters can be applied to images (i.e. rasters) of any size, it is possible to generate habitat selection surfaces of any size using the trained filters from the habitat selection subnetwork. These filters have been trained to extract the salient features of the input data and to produce higher probability values where an animal is likely to take its next step, which also depends on the time of day and the day of the year. Additionally, these habitat selection filters have been conditioned on the movement process of the animal, as only local covariates that are ‘available’ to the animal are used in the training data, which was the motivation for the original development of step selection functions (Fortin et al. 2005; Rhodes et al. 2005).

When applied to a larger landscape area, the resulting probability surface denotes only habitat selection, not the expected distribution of animals, because generating this surface ignores the movement process and assumes that an animal could access any point in the landscape at its next step. This is an assumption of resource selection functions (RSFs), and has been called a ‘naive’ estimation by Signer, Fieberg, and Avgar (2017) because it ignores the movement dynamics. Predictions over broader areas should also be treated with caution, as covariate values that were not encountered during model training may produce inaccurate or misleading results. Acknowledging these assumptions, we include examples of landscape-scale habitat selection, as they still have some utility. Primarily, the resultant habitat selection map provides a visual representation of what the habitat selection subnetwork has learned from the environmental covariates, and of how this changes with respect to the other covariates such as the hour of the day and the day of the year. This can be used as a diagnostic tool for further model development, or as an end in itself, as it highlights the features of the environmental covariates that the model considered influential in determining the next step’s location. From these maps, we can also directly plot the relationship between the covariate values and the predicted habitat selection probability, which represents the marginal contribution of each covariate.
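The ‘any size’ property of the convolution filters can be illustrated with a minimal sketch. This is not the actual deepSSF architecture (the layer sizes here are illustrative), but it shows the mechanism: with a kernel size of 3 and padding of 1, the same filters slide over a local training-sized patch or a landscape-sized raster, and the output keeps the spatial dimensions of the input.

```python
import torch
import torch.nn as nn

# Minimal sketch of a fully convolutional stack; kernel_size=3 with
# padding=1 leaves the spatial dimensions unchanged at every layer
conv_stack = nn.Sequential(
    nn.Conv2d(8, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(4, 1, kernel_size=3, padding=1),
)

local_patch = torch.randn(1, 8, 101, 101)   # local training-sized input
landscape = torch.randn(1, 8, 658, 1075)    # landscape-sized input

# The same filters apply to either input, no retraining or resizing needed
print(conv_stack(local_patch).shape)  # torch.Size([1, 1, 101, 101])
print(conv_stack(landscape).shape)    # torch.Size([1, 1, 658, 1075])
```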

Import packages

Code
import sys
print(sys.version)  # Print Python version in use

import numpy as np                                      # Array operations
import matplotlib.pyplot as plt                         # Plotting library
import torch                                            # Main PyTorch library
from torch.utils.data import Dataset, DataLoader        # Dataset and batch data loading
from datetime import datetime                           # Date/time utilities
import pandas as pd                                     # Data manipulation
import rasterio                                         # Geospatial raster data

import deepSSF_model                                    # Import the file containing the deepSSF model                                          

# Get today's date
today_date = datetime.today().strftime('%Y-%m-%d')
3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:18:29) [MSC v.1929 64 bit (AMD64)]

Import the GPS data

We only use this for selecting a spatial extent for the area we want to predict over.

Code
# select the id of the individual that the model was trained on
buffalo_id = 2005
n_samples = 10297 # 2005 has 10297 samples

# Specify the path to your CSV file
csv_file_path = f'../buffalo_local_data_id/buffalo_{buffalo_id}_data_df_lag_1hr_n{n_samples}.csv'

# Read the CSV file into a DataFrame
buffalo_df = pd.read_csv(csv_file_path)

# Display the first few rows of the DataFrame
print(buffalo_df.head())
             x_            y_                    t_    id           x1_  \
0  41969.310875 -1.435671e+06  2018-07-25T01:04:23Z  2005  41969.310875   
1  41921.521939 -1.435654e+06  2018-07-25T02:04:39Z  2005  41921.521939   
2  41779.439594 -1.435601e+06  2018-07-25T03:04:17Z  2005  41779.439594   
3  41841.203272 -1.435635e+06  2018-07-25T04:04:39Z  2005  41841.203272   
4  41655.463332 -1.435604e+06  2018-07-25T05:04:27Z  2005  41655.463332   

            y1_           x2_           y2_     x2_cent    y2_cent  ...  \
0 -1.435671e+06  41921.521939 -1.435654e+06  -47.788936  16.857110  ...   
1 -1.435654e+06  41779.439594 -1.435601e+06 -142.082345  53.568427  ...   
2 -1.435601e+06  41841.203272 -1.435635e+06   61.763677 -34.322938  ...   
3 -1.435635e+06  41655.463332 -1.435604e+06 -185.739939  31.003534  ...   
4 -1.435604e+06  41618.651923 -1.435608e+06  -36.811409  -4.438037  ...   

         ta    cos_ta         x_min         x_max         y_min         y_max  \
0  1.367942  0.201466  40706.810875  43231.810875 -1.436934e+06 -1.434409e+06   
1 -0.021429  0.999770  40659.021939  43184.021939 -1.436917e+06 -1.434392e+06   
2  2.994917 -0.989262  40516.939594  43041.939594 -1.436863e+06 -1.434338e+06   
3 -2.799767 -0.942144  40578.703272  43103.703272 -1.436898e+06 -1.434373e+06   
4  0.285377  0.959556  40392.963332  42917.963332 -1.436867e+06 -1.434342e+06   

   s2_index  points_vect_cent  year_t2  yday_t2_2018_base  
0         7               NaN     2018                206  
1         7               NaN     2018                206  
2         7               NaN     2018                206  
3         7               NaN     2018                206  
4         7               NaN     2018                206  

[5 rows x 35 columns]

Importing spatial data

Global layers

NDVI

Code
file_path = '../mapping/cropped rasters/ndvi_2018-19_late_dry.tif'
file_path_ndvi_aug2018 = '../mapping/cropped rasters/ndvi_aug_2018.tif'
file_path_ndvi_feb2019 = '../mapping/cropped rasters/ndvi_feb_2019.tif'

# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as separate variable
    ndvi_global = src.read(1)
    # Get the metadata of the raster
    ndvi_meta = src.meta
    raster_transform = src.transform

with rasterio.open(file_path_ndvi_aug2018) as src:
    # Read the raster band as separate variable
    ndvi_aug2018 = src.read(1)
    # Get the metadata of the raster
    ndvi_meta_aug2018 = src.meta

with rasterio.open(file_path_ndvi_feb2019) as src:
    # Read the raster band as separate variable
    ndvi_feb2019 = src.read(1)
    # Get the metadata of the raster
    ndvi_meta_feb2019 = src.meta
Code
print(ndvi_meta)
print(raster_transform)
print(ndvi_global.shape)

# Replace NaNs in the original array with -1, which represents water
ndvi_global = np.nan_to_num(ndvi_global, nan=-1.0)
ndvi_aug2018 = np.nan_to_num(ndvi_aug2018, nan=-1.0)
ndvi_feb2019 = np.nan_to_num(ndvi_feb2019, nan=-1.0)

# create water mask
water_mask = ndvi_global == -1.0

# from the stack of local layers
ndvi_max = 0.8220
ndvi_min = -0.2772

ndvi_global_tens = torch.from_numpy(ndvi_global)
ndvi_aug2018_tens = torch.from_numpy(ndvi_aug2018)
ndvi_feb2019_tens = torch.from_numpy(ndvi_feb2019)

# Normalizing the data
ndvi_global_norm = (ndvi_global_tens - ndvi_min) / (ndvi_max - ndvi_min)
ndvi_aug2018_norm = (ndvi_aug2018_tens - ndvi_min) / (ndvi_max - ndvi_min)
ndvi_feb2019_norm = (ndvi_feb2019_tens - ndvi_min) / (ndvi_max - ndvi_min)

plt.imshow(ndvi_global_norm.numpy())
plt.colorbar()  
plt.show()

plt.imshow(ndvi_aug2018_norm.numpy())
plt.colorbar()
plt.show()

plt.imshow(ndvi_feb2019_norm.numpy())
plt.colorbar()
plt.show()
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': nan, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
       0.0, -25.0, -1406000.0)}
| 25.00, 0.00, 0.00|
| 0.00,-25.00,-1406000.00|
| 0.00, 0.00, 1.00|
(2280, 2400)
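Note that the min and max used above are hard-coded from the local (training) layer stack rather than computed from the landscape rasters, so that the model sees the same scaling at prediction time as during training. A small sketch of this reuse (`min_max_scale` is a hypothetical helper, not part of `deepSSF_model`):

```python
import torch

# Hypothetical helper: min-max scale a layer using constants taken from
# the *training* (local) stack, not from the landscape raster itself
def min_max_scale(layer: torch.Tensor, vmin: float, vmax: float) -> torch.Tensor:
    return (layer - vmin) / (vmax - vmin)

ndvi_min, ndvi_max = -0.2772, 0.8220  # from the stack of local layers

# The training min maps to 0 and the training max maps to 1
x = torch.tensor([ndvi_min, 0.0, ndvi_max])
print(min_max_scale(x, ndvi_min, ndvi_max))  # tensor([0.0000, 0.2522, 1.0000])
```

Landscape cells outside the training range will fall below 0 or above 1 after scaling, which is a quick way to spot the extrapolation cautioned against in the abstract.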

Canopy cover

Code
file_path = '../mapping/cropped rasters/canopy_cover.tif'
# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as separate variable
    canopy_global = src.read(1)
    # Get the metadata of the raster
    canopy_meta = src.meta
Code
print(canopy_meta)
print(canopy_global.shape)

# from the stack of local layers
canopy_max = 82.5000
canopy_min = 0.0

canopy_global_tens = torch.from_numpy(canopy_global)

# Normalizing the data
canopy_global_norm = (canopy_global_tens - canopy_min) / (canopy_max - canopy_min)

plt.imshow(canopy_global_norm.numpy())
plt.colorbar()  
plt.show()
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': -3.3999999521443642e+38, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
       0.0, -25.0, -1406000.0)}
(2280, 2400)

Herbaceous vegetation

Code
file_path = '../mapping/cropped rasters/veg_herby.tif'
# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as separate variable
    herby_global = src.read(1)
    # Get the metadata of the raster
    herby_meta = src.meta
Code
print(herby_meta)
print(herby_global.shape)

# from the stack of local layers
herby_max = 1.0
herby_min = 0.0

herby_global_tens = torch.from_numpy(herby_global)

# Normalizing the data
herby_global_norm = (herby_global_tens - herby_min) / (herby_max - herby_min)

plt.imshow(herby_global_norm.numpy())
plt.colorbar()  
plt.show()
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': -3.3999999521443642e+38, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
       0.0, -25.0, -1406000.0)}
(2280, 2400)

Slope

Code
file_path = '../mapping/cropped rasters/slope.tif'
# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as separate variable
    slope_global = src.read(1)
    # Get the metadata of the raster
    slope_meta = src.meta
    slope_transform = src.transform # same as the raster transform in the NDVI raster read

# print(slope_global)
print(slope_transform)
| 25.00, 0.00, 0.00|
| 0.00,-25.00,-1406000.00|
| 0.00, 0.00, 1.00|
Code
print(slope_meta)
print(slope_global.shape)

# Replace NaNs in the original array with 0.0 (i.e. flat ground)
slope_global = np.nan_to_num(slope_global, nan=0.0)

# from the stack of local layers
slope_max = 12.2981
slope_min = 0.0006

slope_global_tens = torch.from_numpy(slope_global)

# Normalizing the data
slope_global_norm = (slope_global_tens - slope_min) / (slope_max - slope_min)

plt.imshow(slope_global_norm.numpy())
plt.colorbar()  
plt.show()
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': nan, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
       0.0, -25.0, -1406000.0)}
(2280, 2400)

Testing functions for selecting subsets of the raster layer, using torch objects

Code
from rasterio.plot import show  # needed for plotting a raster with its geographic transform

# Create a figure and axis with matplotlib
fig, ax = plt.subplots(figsize=(7.5, 7.5))

# Plot the raster
show(slope_global, transform=raster_transform, ax=ax, cmap='viridis')

# Set the title and labels
ax.set_title('Raster with Geographic Coordinates')
ax.set_xlabel('Easting (m)')  # the CRS is projected (metres), not geographic degrees
ax.set_ylabel('Northing (m)')

# Show the plot
plt.show()

Code
x, y = 5.9e4, -1.447e6
print(x, y)

# Convert geographic coordinates to pixel coordinates
px, py = ~raster_transform * (x, y)
# Round pixel coordinates to integers
px, py = int(round(px)), int(round(py))

# Print the pixel coordinates   
print(px, py)
59000.0 -1447000.0
2360 1640
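For a north-up raster such as this one (no rotation terms in the affine transform), the `~raster_transform * (x, y)` call above reduces to simple arithmetic on the affine coefficients: 25 m pixels with an origin at (0, -1406000). `world_to_pixel` below is a hypothetical helper that makes that arithmetic explicit:

```python
# Hypothetical helper replicating ~raster_transform * (x, y) for a
# north-up raster (zero rotation terms in the affine transform)
def world_to_pixel(x, y, px_size=25.0, x_origin=0.0, y_origin=-1406000.0):
    col = (x - x_origin) / px_size    # columns increase eastwards
    row = (y - y_origin) / -px_size   # negative y pixel size: rows increase downwards
    return col, row

col, row = world_to_pixel(5.9e4, -1.447e6)
print(int(round(col)), int(round(row)))  # 2360 1640, matching the output above
```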

Running the model on the layers

Set the device for the model

Code
# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print(f"Using {device} device")
Using cpu device

Load the model

As described in the deepSSF_train.ipynb script, we saved the model definition into a file named deepSSF_model.py. We can instantiate the model by importing that file (which was done when importing the other packages) and calling the model classes with the parameter dictionary from that script.

Code
params = deepSSF_model.ModelParams(deepSSF_model.params_dict)
model = deepSSF_model.ConvJointModel(params).to(device)
print(model)
ConvJointModel(
  (scalar_grid_output): Scalar_to_Grid_Block()
  (conv_habitat): Conv2d_block_spatial(
    (conv2d): Sequential(
      (0): Conv2d(8, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU()
      (2): Conv2d(4, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU()
      (4): Conv2d(4, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
  )
  (conv_movement): Conv2d_block_toFC(
    (conv2d): Sequential(
      (0): Conv2d(8, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU()
      (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (3): Conv2d(4, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): ReLU()
      (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (6): Flatten(start_dim=1, end_dim=-1)
    )
  )
  (fcn_movement_all): FCN_block_all_movement(
    (ffn): Sequential(
      (0): Linear(in_features=2500, out_features=128, bias=True)
      (1): Dropout(p=0.1, inplace=False)
      (2): ReLU()
      (3): Linear(in_features=128, out_features=128, bias=True)
      (4): Dropout(p=0.1, inplace=False)
      (5): ReLU()
      (6): Linear(in_features=128, out_features=12, bias=True)
    )
  )
  (movement_grid_output): Params_to_Grid_Block()
)
Code
# # load the model weights
# print(model.state_dict())
model.load_state_dict(torch.load(f'model_checkpoints/checkpoint_CNN_buffalo2005_2025-01-31.pt', 
                                 map_location=torch.device('cpu'),
                                 weights_only=True))
# print(model.state_dict())
# model.eval()
<All keys matched successfully>

Run CNN component on landscape

Select a smaller extent of the landscape

Code
# from the buffalo data
buffer = 1250
min_x = min(buffalo_df['x_']) - buffer
max_x = max(buffalo_df['x_']) + buffer
min_y = min(buffalo_df['y_']) - buffer
max_y = max(buffalo_df['y_']) + buffer

# custom extent
# min_x = 28148.969145
# max_x = 47719.496935
# min_y = -1442210.335861
# max_y = -1433133.681746

# Convert geographic coordinates to pixel coordinates
min_px, min_py = ~raster_transform * (min_x, min_y)
print(min_px, min_py)
max_px, max_py = ~raster_transform * (max_x, max_y)
print(max_px, max_py)
# Round pixel coordinates to integers
min_px, max_px, min_py, max_py = int(round(min_px)), int(round(max_px)), int(round(min_py)), int(round(max_py))

# Print the pixel coordinates   
print(min_px, max_px, min_py, max_py)
1091.3389860694217 1546.6350942641802
2166.104503472696 888.6517715825685
1091 2166 1547 889

Subset all layers

Code
# Subset each normalised layer to the pixel extent computed above
# (numpy indexing is [row, col], and row indices increase downwards)
ndvi_subset = ndvi_global_norm[max_py:min_py, min_px:max_px]
ndvi_aug2018_subset = ndvi_aug2018_norm[max_py:min_py, min_px:max_px]
ndvi_feb2019_subset = ndvi_feb2019_norm[max_py:min_py, min_px:max_px]
canopy_subset = canopy_global_norm[max_py:min_py, min_px:max_px]
herby_subset = herby_global_norm[max_py:min_py, min_px:max_px]
slope_subset = slope_global_norm[max_py:min_py, min_px:max_px]

ndvi_subset_natural = ndvi_subset * (ndvi_max - ndvi_min) + ndvi_min
ndvi_aug2018_subset_natural = ndvi_aug2018_subset * (ndvi_max - ndvi_min) + ndvi_min
ndvi_feb2019_subset_natural = ndvi_feb2019_subset * (ndvi_max - ndvi_min) + ndvi_min
canopy_subset_natural = canopy_subset * (canopy_max - canopy_min) + canopy_min
herby_subset_natural = herby_subset * (herby_max - herby_min) + herby_min
slope_subset_natural = slope_subset * (slope_max - slope_min) + slope_min

# Find the global min and max values across all subsets
vmin = min(ndvi_subset_natural.min(), ndvi_aug2018_subset_natural.min(), ndvi_feb2019_subset_natural.min())
print(vmin)
vmax = max(ndvi_subset_natural.max(), ndvi_aug2018_subset_natural.max(), ndvi_feb2019_subset_natural.max())
print(vmax)



# Plot the subsets with the same color scale
plt.imshow(ndvi_subset_natural, cmap='viridis', vmin=vmin, vmax=vmax)
plt.colorbar(shrink=0.7)
plt.title(f'NDVI Late-dry season average 2018-19')
plt.savefig(f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_ndvi_latedry_2018-19.png", dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

plt.imshow(ndvi_aug2018_subset_natural, cmap='viridis', vmin=vmin, vmax=vmax)
plt.colorbar(shrink=0.7)
plt.title(f'NDVI August 2018')
plt.savefig(f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_ndvi_aug2018.png", dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

plt.imshow(ndvi_feb2019_subset_natural, cmap='viridis', vmin=vmin, vmax=vmax)
plt.colorbar(shrink=0.7)
plt.title(f'NDVI February 2019')
plt.savefig(f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_ndvi_feb2019.png", dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

# plot the subset
plt.imshow(canopy_subset_natural, cmap='viridis')
plt.colorbar(shrink=0.7)
plt.title(f'Canopy cover')
plt.savefig(f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_canopy_cover.png", dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

# plot the subset
plt.imshow(herby_subset_natural, cmap='viridis')
plt.colorbar(shrink=0.7)
plt.title(f'Herbaceous vegetation')
plt.savefig(f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_herby_veg.png", dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

# plot the subset
plt.imshow(slope_subset_natural, cmap='viridis')
plt.colorbar(shrink=0.7)
plt.title(f'Slope')
plt.savefig(f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_slope.png", dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory
tensor(-0.2894)
tensor(0.7881)

Select layer extent

Code
# full extent
# Assuming ndvi_global_norm, canopy_global_norm, herby_global_norm, slope_global_norm, and params are already defined
# ndvi_landscape = ndvi_global_norm.to(params.device)
# print(ndvi_landscape.shape)
# canopy_landscape = canopy_global_norm.to(params.device)
# herby_landscape = herby_global_norm.to(params.device)
# slope_landscape = slope_global_norm.to(params.device)

# subset extent
# Assuming ndvi_subset, canopy_subset, herby_subset, slope_subset, and params are already defined

# ndvi_landscape = ndvi_subset.to(params.device)
ndvi_landscape = ndvi_aug2018_subset.to(params.device)
# ndvi_landscape = ndvi_feb2019_subset.to(params.device)
print(ndvi_landscape.shape)

canopy_landscape = canopy_subset.to(params.device)
herby_landscape = herby_subset.to(params.device)
slope_landscape = slope_subset.to(params.device)
torch.Size([658, 1075])
Code
# hour of the day (hour) 
hour_t2 = 6
# convert to sine and cosine
hour_t2_sin = np.sin(2*np.pi*hour_t2/24)
hour_t2_cos = np.cos(2*np.pi*hour_t2/24)

# day of the year (yday)
yday_t2 = 225
# convert to sine and cosine
yday_t2_sin = np.sin(2*np.pi*yday_t2/365)
yday_t2_cos = np.cos(2*np.pi*yday_t2/365)

# Convert lists to PyTorch tensors
hour_t2_sin_tensor = torch.tensor(hour_t2_sin).float().unsqueeze(0)
hour_t2_cos_tensor = torch.tensor(hour_t2_cos).float().unsqueeze(0)
yday_t2_sin_tensor = torch.tensor(yday_t2_sin).float().unsqueeze(0)
yday_t2_cos_tensor = torch.tensor(yday_t2_cos).float().unsqueeze(0)

def scalar_to_grid(x, dim_x, dim_y):
    scalar_map = x.view(1, 1, 1, 1).expand(1, 1, dim_x, dim_y)
    return scalar_map

# Stack the tensors to create a tensor of shape (4, 1)
scalar_covariates = torch.stack([hour_t2_sin_tensor, hour_t2_cos_tensor, yday_t2_sin_tensor, yday_t2_cos_tensor], dim=0)
print(scalar_covariates.shape)

scalar_grids = torch.cat([scalar_to_grid(tensor, ndvi_landscape.shape[0], ndvi_landscape.shape[1]) for tensor in scalar_covariates], dim=1)
print(scalar_grids.shape)
torch.Size([4, 1])
torch.Size([1, 4, 658, 1075])
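A note on the `scalar_to_grid` function above: `.expand()` broadcasts the single value to the grid shape as a *view* over the original one-element tensor, so no full (658, 1075) array is allocated at this point (the later `torch.cat` makes the copy). A small sketch, assuming the grid size of this landscape subset:

```python
import torch

x = torch.tensor(0.5)
grid = x.view(1, 1, 1, 1).expand(1, 1, 658, 1075)

print(grid.shape)     # torch.Size([1, 1, 658, 1075])
print(grid.stride())  # (1, 1, 0, 0) -- zero strides mean the expanded dims are views
print(grid[0, 0, 123, 456])  # tensor(0.5000): every cell reads the same element
```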
Code
# Stack the channels along a new axis; here, 1 is commonly used for channel axis in PyTorch
landscape_stack = torch.stack([ndvi_landscape, canopy_landscape, herby_landscape, slope_landscape], dim=0)
landscape_stack = landscape_stack.unsqueeze(0)
print(landscape_stack.shape)

full_stack = torch.cat([landscape_stack, scalar_grids], dim=1)
print(full_stack.shape)
torch.Size([1, 4, 658, 1075])
torch.Size([1, 8, 658, 1075])
Code
landscape_predictions = model.conv_habitat(full_stack)
print(landscape_predictions.shape)
torch.Size([1, 658, 1075])
Code
# Assuming the output has the shape [batch_size, channels, height, width]
# For simplicity, let's visualize the first image in the batch and its first channel
output_image = landscape_predictions[0].detach().cpu().numpy()

# Create masks to blank out the edge cells (which can skew the colour scale)
x_mask = np.ones_like(output_image)
y_mask = np.ones_like(output_image)

y_dim = output_image.shape[0]
x_dim = output_image.shape[1]

buffer = 3

# Create the mask for x and y coordinates
x_mask[:, :buffer] = -np.inf
x_mask[:, x_dim-buffer:] = -np.inf
y_mask[:buffer, :] = -np.inf
y_mask[y_dim-buffer:, :] = -np.inf


# mask out cells on the edges that affect the colour scale
output_image = output_image * x_mask * y_mask
# output_image[ndvi_global == -1.0] = -np.inf

# Plot the image using matplotlib
filename_landscape_preds = f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_hab_log_prob_norm_yday{yday_t2}_hour{hour_t2}.png"
# filename_landscape_preds = f"outputs/dl_prob_maps_landscape/id{buffalo_id}_hab_log_prob_norm_{hour_t2}_{today_date}.png"
plt.imshow(output_image, cmap='viridis')  # Use 'gray' for grayscale or change as needed
plt.colorbar()  # Optional: show the color scale bar
plt.title(f'Day of the year: {yday_t2}, Hour: {hour_t2}')
plt.savefig(filename_landscape_preds, dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

# Plot the exponentiated (probability-scale) image using matplotlib
# (a distinct filename so the log-probability figure above is not overwritten)
filename_landscape_preds = f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_hab_prob_norm_yday{yday_t2}_hour{hour_t2}.png"
plt.imshow(np.exp(output_image), cmap='viridis')  # Use 'gray' for grayscale or change as needed
plt.colorbar()  # Optional: show the color scale bar
plt.title(f'Day of the year: {yday_t2}, Hour: {hour_t2}')
plt.savefig(filename_landscape_preds, dpi=600, bbox_inches='tight')
plt.show()
plt.close()  # Close the figure to free memory

Extracting values of the predictions to assess influence of landscape features

Code
import torch
import pandas as pd
import matplotlib.pyplot as plt

# Assuming ndvi_landscape, canopy_landscape, herby_landscape, slope_landscape, and landscape_predictions are already defined tensors
# Flatten the tensors and convert to numpy arrays
ndvi_array = ndvi_landscape.detach().numpy()
ndvi_array = ndvi_array * x_mask * y_mask
ndvi_array = ndvi_array.flatten()

canopy_array = canopy_landscape.detach().numpy()
canopy_array = canopy_array * x_mask * y_mask
canopy_array = canopy_array.flatten()

herby_array = herby_landscape.detach().numpy()
herby_array = herby_array * x_mask * y_mask
herby_array = herby_array.flatten()

slope_array = slope_landscape.detach().numpy()
slope_array = slope_array * x_mask * y_mask
slope_array = slope_array.flatten()

landscape_predictions_array = torch.exp(landscape_predictions).detach().numpy()
landscape_predictions_array = landscape_predictions_array * x_mask * y_mask
landscape_predictions_array = landscape_predictions_array.flatten()


# Create a single DataFrame with updated column names
df = pd.DataFrame({
    'NDVI': ndvi_array,
    'Canopy_cover': canopy_array,
    'Herbaceous_vegetation': herby_array,
    'Slope': slope_array,
    'Predictions': landscape_predictions_array
})

# Export the DataFrame to a CSV file
df.to_csv(f"outputs/id{buffalo_id}_habitat_suitability_landscape_subset.csv", index=False)

# Plot the data with plt.scatter
plt.figure(figsize=(10, 6))
plt.scatter(df['NDVI'], df['Predictions'], alpha=0.5)
plt.title('NDVI vs Predictions')
plt.xlabel('NDVI')
plt.ylabel('Predictions')
plt.grid(True)
plt.show()

plt.figure(figsize=(10, 6))
plt.scatter(df['Canopy_cover'], df['Predictions'], alpha=0.5)
plt.title('Canopy cover vs Predictions')
plt.xlabel('Canopy cover')
plt.ylabel('Predictions')
plt.grid(True)
plt.show()

plt.figure(figsize=(10, 6))
plt.scatter(df['Herbaceous_vegetation'], df['Predictions'], alpha=0.5)
plt.title('Herbaceous vegetation vs Predictions')
plt.xlabel('Herbaceous vegetation')
plt.ylabel('Predictions')
plt.grid(True)
plt.show()

plt.figure(figsize=(10, 6))
plt.scatter(df['Slope'], df['Predictions'], alpha=0.5)
plt.title('Slope vs Predictions')
plt.xlabel('Slope')
plt.ylabel('Predictions')
plt.grid(True)
plt.show()
C:\Users\for329\AppData\Local\Temp\ipykernel_31184\68050730.py:12: RuntimeWarning: invalid value encountered in multiply
  canopy_array = canopy_array * x_mask * y_mask
C:\Users\for329\AppData\Local\Temp\ipykernel_31184\68050730.py:16: RuntimeWarning: invalid value encountered in multiply
  herby_array = herby_array * x_mask * y_mask

Together to loop over hours

Code
# Initialize global min and max values
global_vmin = float('inf')
global_vmax = float('-inf')
Code
# Create a single DataFrame with updated column names
df_hourly = pd.DataFrame({
    'NDVI': ndvi_array,
    'Canopy_cover': canopy_array,
    'Herbaceous_vegetation': herby_array,
    'Slope': slope_array
})

# Define the range of hours you want to loop over
hours = range(1,25) # to start at 1
# hours = np.arange(0,24, 0.1)

# Initialize lists to store results
scalar_grids_list = []
landscape_predictions_list = []

for hour_t2 in hours:
    # convert to sine and cosine
    hour_t2_sin = np.sin(2*np.pi*hour_t2/24)
    hour_t2_cos = np.cos(2*np.pi*hour_t2/24)

    # day of the year (yday)
    # yday_t2 = 45
    # convert to sine and cosine
    yday_t2_sin = np.sin(2*np.pi*yday_t2/365)
    yday_t2_cos = np.cos(2*np.pi*yday_t2/365)

    # Convert lists to PyTorch tensors
    hour_t2_sin_tensor = torch.tensor(hour_t2_sin).float().unsqueeze(0)
    hour_t2_cos_tensor = torch.tensor(hour_t2_cos).float().unsqueeze(0)
    yday_t2_sin_tensor = torch.tensor(yday_t2_sin).float().unsqueeze(0)
    yday_t2_cos_tensor = torch.tensor(yday_t2_cos).float().unsqueeze(0)

    # Stack the tensors to create a tensor of shape (4, 1)
    scalar_covariates = torch.stack([hour_t2_sin_tensor, hour_t2_cos_tensor, yday_t2_sin_tensor, yday_t2_cos_tensor], dim=0)
    scalar_grids = torch.cat([scalar_to_grid(tensor, ndvi_landscape.shape[0], ndvi_landscape.shape[1]) for tensor in scalar_covariates], dim=1)

    full_stack = torch.cat([landscape_stack, scalar_grids], dim=1)
    # print(full_stack.shape)

    landscape_predictions = model.conv_habitat(full_stack)
    # print(landscape_predictions.shape)

    # Assuming the output has the shape [batch_size, channels, height, width]
    # For simplicity, let's visualize the first image in the batch and its first channel
    output_image = landscape_predictions[0].detach().cpu().numpy()

    # mask out cells on the edges that affect the colour scale
    output_image = output_image * x_mask * y_mask

    # Check if output_image is valid before updating global min and max
    if output_image.size > 0:
        # Ignore masked values in the calculation
        valid_values = output_image[np.isfinite(output_image)]
        if valid_values.size > 0:
            current_min = valid_values.min()
            current_max = valid_values.max()
            global_vmin = min(global_vmin, current_min)
            global_vmax = max(global_vmax, current_max)

    predictions_array = np.exp(output_image).flatten()
    df_hourly[f'{hour_t2}'] = predictions_array

    # Save the figure
    filename_landscape_preds = f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_landscape_habitat_selection_yday{yday_t2}_hour{hour_t2}.png"
    # filename_landscape_preds = f"outputs/dl_prob_maps_landscape/id{buffalo_id}_landscape_habitat_selection_{hour_t2}_{today_date}.png"
    plt.figure()  # Create a new figure
    # plt.imshow(output_image)
    plt.imshow(output_image, vmin=global_vmin, vmax=global_vmax)
    # plt.imshow(output_image, transform=raster_transform, vmin=global_vmin, vmax=global_vmax)
    plt.colorbar(shrink=0.7)
    plt.title(f'Day of the year: {yday_t2}, Hour: {hour_t2}')
    plt.savefig(filename_landscape_preds, dpi=1000, bbox_inches='tight')
    plt.show()
    plt.close()  # Close the figure to free memory

    # # Save the figure
    # filename_landscape_preds = f"outputs/dl_prob_maps_landscape/{buffalo_id}/id{buffalo_id}_exp_landscape_habitat_selection_yday{yday_t2}_hour{hour_t2}.png"
    # # filename_landscape_preds = f"outputs/dl_prob_maps_landscape/id{buffalo_id}_exp_landscape_habitat_selection_{hour_t2}_{today_date}.png"
    # plt.figure()  # Create a new figure
    # # plt.imshow(np.exp(output_image))
    # plt.imshow(np.exp(output_image), vmin=np.exp(global_vmin), vmax=np.exp(global_vmax))
    # # plt.imshow(np.exp(output_image), transform=raster_transform, vmin=np.exp(global_vmin), vmax=np.exp(global_vmax))
    # plt.colorbar(shrink=0.7)
    # plt.title(f'Day of the year: {yday_t2}, Hour: {hour_t2}')
    # plt.savefig(filename_landscape_preds, dpi=600, bbox_inches='tight')
    # plt.show()
    # plt.close()  # Close the figure to free memory

print(global_vmin, global_vmax)


# Export the DataFrame to a CSV file
df_hourly.to_csv(f"outputs/id{buffalo_id}_hourly_habitat_suitability_landscape_subset_yday{yday_t2}.csv", index=False)

# # Randomly sample rows from the df_hourly DataFrame
# sampled_df_hourly = df_hourly.sample(frac=0.1, random_state=42)  # Adjust the fraction as needed
# # Export the sampled DataFrame to a CSV file
# sampled_df_hourly.to_csv(f"outputs/id{buffalo_id}_hourly_fine_habitat_suitability_landscape_subset.csv", index=False)

-19.016916 -10.933311
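The hourly loop above relies on the sine/cosine encoding of time being cyclic: hour 24 maps to the same point as hour 0, and hours either side of midnight are as close in the encoded space as hours either side of noon. A quick check of both properties:

```python
import numpy as np

# Encode an hour of the day as a point on the unit circle
def hour_encode(hour):
    angle = 2 * np.pi * hour / 24
    return np.sin(angle), np.cos(angle)

# Hour 24 wraps around to hour 0
print(np.allclose(hour_encode(24), hour_encode(0)))  # True

# Euclidean distance in (sin, cos) space between encoded hours
def dist(a, b):
    da, db = np.subtract(hour_encode(a), hour_encode(b))
    return np.hypot(da, db)

# 23:00 to 01:00 is as close as 11:00 to 13:00 (both two hours apart)
print(np.isclose(dist(23, 1), dist(11, 13)))  # True
```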

References

Fortin, Daniel, Hawthorne L Beyer, Mark S Boyce, Douglas W Smith, Thierry Duchesne, and Julie S Mao. 2005. “Wolves Influence Elk Movements: Behavior Shapes a Trophic Cascade in Yellowstone National Park.” Ecology 86 (5): 1320–30. https://doi.org/10.1890/04-0953.
Rhodes, Jonathan R, Clive A McAlpine, Daniel Lunney, and Hugh P Possingham. 2005. “A Spatially Explicit Habitat Selection Model Incorporating Home Range Behavior.” Ecology 86 (5): 1199–1205. https://doi.org/10.1890/04-0912.
Signer, Johannes, John Fieberg, and Tal Avgar. 2017. “Estimating Utilization Distributions from Fitted Step‐selection Functions.” Ecosphere 8 (4): e01771. https://doi.org/10.1002/ecs2.1771.